5 - Strong vs. Narrow AI

Right. There are a couple of general topics that I still want to get out of the way. One of them is a very important classification we make in AI: we classify the methods we are using into narrow AI, which means we are trying to produce behavior on machines that is human-like for a restricted application, and strong AI, where we say we are going to do whatever a human can do, and try to do all of that as well. Those are totally different motivations, and the hype about AI has a lot to do with misunderstanding these two agendas.

Strong AI is what Hollywood thinks AI is or should be. Think of movies like I, Robot, or this new movie with the AI girl, and so on. They all have essentially the same script, right? Some crook or villain achieves the breakthrough in strong AI and threatens to take over the world, but there is a hero who can stop it at the last minute. A whole succession of films. For that to work, you need strong AI. You cannot really make a movie saying: there is this wonderful program by an evil empire that can play Go better than any human. Oh God.

Playing Go is a typical narrow AI thing. Understanding language and doing dictation, or doing Alexa, is a narrow AI thing. Almost all computer scientists think narrow AI, or applied AI, or something like that, is what we should do, because it is a goal we can actually make progress on. There will be some math involved, and Prolog of course, but we can actually do this, and we have shown that we can do this. We have beaten the chess champion. We have self-driving cars on the road; last week's news was that the Waymo cars have clocked 10 million miles, and in spring an Uber car killed the first pedestrian. All of those are narrow AI.

Think about this guy here, where is he? Lee Sedol. He can play Go very well, but he can also do other things. I'm pretty sure he is a decent cook of Asian food, and he reads bedtime stories and sings lullabies to his kids. I don't know whether he has kids, but he could. And maybe he does philosophy. All of those are things that humans can do, and then there is this one narrow AI thing, which is playing Go well or not so well.

I would like you to keep this strong versus narrow AI distinction in mind. It is maybe the most important thing you can learn here for talking to the outside world. And by the way, of course we will sometimes discuss strong AI, but we are going to do narrow AI techniques.

The interesting thing is that advancing narrow AI almost never directly helps strong AI. You could say: well, I'm going to build a Go player, a lullaby singer, an Asian food cook, a philosophy discusser, and all of those other things that this guy, Lee Sedol, can do, and put them onto a big cluster. And when you ask an Asian cooking question, I'll route you through to the AI cook. But it doesn't work that way; we can't make that work. Think about the picture: we have wonderful narrow AI systems, we put a router in front of them and a nice face on it, and we have solved AI. But the component that decides "this one is for the Asian cook" is solving a task that we call AI-complete: solving that routing problem is just as hard as solving the full problem. And there are quite a few problems that are AI-complete. Essentially, doing natural language question answering at human level is, we believe, as hard as doing strong AI.
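To make the picture of a cluster of narrow systems behind a router concrete, here is a minimal sketch in Python. Every name in it (NARROW_SYSTEMS, route, the keyword lists) is a hypothetical illustration, not a system from the lecture; the point of the sketch is that the route function itself has to understand an arbitrary question well enough to pick the right specialist, and that understanding step is exactly the AI-complete part hiding behind the nice face.

```python
import re

# Hypothetical stand-ins for separate narrow-AI systems (names are illustrative only).
def go_player(question: str) -> str:
    return "Board analysis for: " + question

def asian_cook(question: str) -> str:
    return "Recipe suggestion for: " + question

def lullaby_singer(question: str) -> str:
    return "Lullaby lyrics for: " + question

NARROW_SYSTEMS = {
    "go": go_player,
    "cooking": asian_cook,
    "lullaby": lullaby_singer,
}

# Trigger words per specialist; a real router would need far more than this.
KEYWORDS = {
    "go": {"go", "joseki", "board"},
    "cooking": {"cook", "recipe", "wok"},
    "lullaby": {"lullaby", "bedtime", "sleep"},
}

def route(question: str) -> str:
    """Naive keyword router in front of the narrow systems.

    It only works while the question literally contains a trigger word.
    Deciding what an arbitrary natural-language question is really about
    requires general language understanding, which is the AI-complete step.
    """
    tokens = set(re.findall(r"[a-z]+", question.lower()))
    for topic, words in KEYWORDS.items():
        if tokens & words:
            return NARROW_SYSTEMS[topic](question)
    return "No narrow system matched; the router would have to actually understand the question."

if __name__ == "__main__":
    print(route("What should I cook in a wok tonight?"))
    print(route("Can you sing my daughter a lullaby at bedtime?"))
    print(route("Is taking over the world a reasonable plan?"))  # falls through
```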

Strong AI, or AGI, or something like that, is actually what people are scared of. I'm not at all scared. If you are scared of AI taking over the world, that is what is sometimes called the singularity by people like Ray Kurzweil, who co-founded something called Singularity University. The singularity is the point at which strong AI becomes so strong that it can make itself more intelligent on its own. Scary, because then it takes off, and depending on whom you believe there will be biotopes, or zoos, for humans, where they say: oh, look at those. Kurzweil has been talking about the singularity for maybe 20 years, and it is always about 10 years away. Very convenient. I'm not worried about that at all. Not during my lifetime. Probably not during yours either, although if you work hard, then maybe during your lifetime.

Part of the chapter: Artificial Intelligence – Who?, What?, When?, Where?, and Why?
Access: Open Access
Duration: 00:15:12
Recording date: 2020-10-23
Uploaded: 2020-10-23 14:26:53
Language: en-US

Difference between narrow AI and strong AI, an explanation of AI-completeness, and some notions on AGI.
